1.
Trials; 25(1): 247, 2024 Apr 09.
Article in English | MEDLINE | ID: mdl-38594753

ABSTRACT

BACKGROUND: Brain-derived neurotrophic factor (BDNF) is essential for antidepressant treatment of major depressive disorder (MDD). Our repeated studies suggest that DNA methylation of a specific CpG site in the promoter region of exon IV of the BDNF gene (CpG -87) might be predictive of the efficacy of monoaminergic antidepressants such as selective serotonin reuptake inhibitors (SSRIs), serotonin-norepinephrine reuptake inhibitors (SNRIs), and others. This trial aims to evaluate whether knowing the biomarker is non-inferior to treatment-as-usual (TAU) regarding remission rates while exhibiting significantly fewer adverse events (AE). METHODS: The BDNF trial is a prospective, randomized, rater-blinded diagnostic study conducted at five university hospitals in Germany. The study's main hypothesis is that knowing the methylation status of CpG -87 is non-inferior to not knowing it with respect to the remission rate, while significantly reducing the rate of patients experiencing at least one AE. The baseline assessment will occur upon hospitalization, with a follow-up assessment on day 49 (± 3). A telephone follow-up will be conducted on day 70 (± 3). A total of 256 patients will be recruited, and methylation will be evaluated in all participants. They will be randomly assigned to either the marker or the TAU group. In the marker group, the methylation results will be shared with both the patient and their treating physician. In the TAU group, neither the patients nor their treating physicians will receive the marker status. The primary endpoints are the rate of patients achieving remission on day 49 (± 3), defined as a score of ≤ 10 on the Hamilton Depression Rating Scale (HDRS-24), and the occurrence of AE. ETHICS AND DISSEMINATION: The trial protocol has received approval from the Institutional Review Boards at the five participating universities. This trial will generate valuable data on a predictive biomarker for antidepressant treatment in patients with MDD. The findings will be shared with study participants, disseminated through professional society meetings, and published in peer-reviewed journals. TRIAL REGISTRATION: German Clinical Trial Register DRKS00032503. Registered on 17 August 2023.
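As a minimal, illustrative sketch of the primary endpoint logic, the snippet below encodes the remission criterion (HDRS-24 total ≤ 10) and a one-sided non-inferiority comparison of remission rates between the marker and TAU arms. The 10-percentage-point margin, the example rates, and the even 128/128 split are assumptions for illustration only; the abstract does not state the trial's prespecified margin or analysis plan.

```python
from math import sqrt

def remission(hdrs24_total: int) -> bool:
    """Remission as defined in the protocol: HDRS-24 total score <= 10."""
    return hdrs24_total <= 10

def noninferiority_z(p_marker: float, p_tau: float,
                     n_marker: int, n_tau: int,
                     margin: float = 0.10) -> float:
    """One-sided z-statistic for H0: p_marker - p_tau <= -margin,
    using a Wald (unpooled) standard error. The 0.10 margin is a
    placeholder, not the trial's prespecified value."""
    se = sqrt(p_marker * (1 - p_marker) / n_marker
              + p_tau * (1 - p_tau) / n_tau)
    return (p_marker - p_tau + margin) / se

print(remission(9), remission(14))  # True False

# Hypothetical remission rates with the planned 256 patients split evenly:
z = noninferiority_z(p_marker=0.52, p_tau=0.50, n_marker=128, n_tau=128)
print(f"z = {z:.2f}; non-inferiority at one-sided alpha = 0.05 requires z > 1.645")
```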


Subjects
Brain-Derived Neurotrophic Factor, Major Depressive Disorder, Humans, Brain-Derived Neurotrophic Factor/genetics, Major Depressive Disorder/diagnosis, Major Depressive Disorder/drug therapy, Major Depressive Disorder/genetics, Prospective Studies, Antidepressive Agents/adverse effects, Selective Serotonin Reuptake Inhibitors, Methylation, Biomarkers
2.
BMC Bioinformatics; 24(1): 121, 2023 Mar 28.
Article in English | MEDLINE | ID: mdl-36978010

ABSTRACT

BACKGROUND: In recent years, advances in high-throughput sequencing technologies have enabled the use of genomic information in many fields, such as precision medicine, oncology, and food quality control. The amount of genomic data being generated is growing rapidly and is expected to soon surpass the amount of video data. The majority of sequencing experiments, such as genome-wide association studies, have the goal of identifying variations in the gene sequence to better understand phenotypic variations. We present a novel approach for compressing gene sequence variations with random access capability: the Genomic Variant Codec (GVC). We use techniques such as binarization, joint row- and column-wise sorting of blocks of variations, as well as the image compression standard JBIG for efficient entropy coding. RESULTS: Our results show that GVC provides the best trade-off between compression and random access compared to the state of the art: it reduces the genotype information size from 758 GiB down to 890 MiB on the publicly available 1000 Genomes Project (phase 3) data, which is 21% less than the state of the art in random-access-capable methods. CONCLUSIONS: By providing the best results in terms of combined random access and compression, GVC facilitates the efficient storage of large collections of gene sequence variations. In particular, the random access capability of GVC enables seamless remote data access and application integration. The software is open source and available at https://github.com/sXperfect/gvc/.
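The sketch below illustrates the two ideas the abstract names, binarization into bit planes and joint row/column sorting ahead of a bi-level image coder such as JBIG, on a tiny genotype block. It is an assumption-laden stand-in, not GVC's actual block layout or sorting algorithm; see the linked repository for the real implementation.

```python
import numpy as np

def binarize_genotypes(gt: np.ndarray) -> list[np.ndarray]:
    """Split an (N variants x M samples) matrix of small integer
    genotypes (0, 1, 2, ...) into binary bit planes, as a stand-in
    for GVC's binarization step (the real codec's layout differs)."""
    n_bits = int(gt.max()).bit_length() or 1
    return [((gt >> b) & 1).astype(np.uint8) for b in range(n_bits)]

def sort_block(plane: np.ndarray) -> np.ndarray:
    """Greedy row-then-column reordering that clusters similar rows and
    columns so a bi-level coder sees longer uniform runs. Illustrative
    only; GVC uses its own joint sorting."""
    row_order = np.argsort(plane.sum(axis=1))
    col_order = np.argsort(plane[row_order].sum(axis=0))
    return plane[row_order][:, col_order]

# Hypothetical 4-variant x 6-sample block of allele counts:
block = np.array([[0, 1, 2, 0, 1, 0],
                  [0, 1, 2, 0, 1, 0],
                  [1, 0, 0, 2, 0, 1],
                  [0, 0, 1, 0, 0, 0]])
for plane in binarize_genotypes(block):
    print(sort_block(plane))
```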


Subjects
Data Compression, Data Compression/methods, Algorithms, Genome-Wide Association Study, Genomics/methods, Software, High-Throughput Nucleotide Sequencing/methods, Sequence Analysis, DNA/methods
3.
Bioinformatics; 36(7): 2275-2277, 2020 Apr 01.
Article in English | MEDLINE | ID: mdl-31830243

ABSTRACT

MOTIVATION: In an effort to provide a response to the ever-expanding generation of genomic data, the International Organization for Standardization (ISO) is designing a new solution for the representation, compression and management of genomic sequencing data: the Moving Picture Experts Group (MPEG)-G standard. This paper discusses the first implementation of an MPEG-G compliant entropy codec: GABAC. GABAC combines proven coding technologies, such as context-adaptive binary arithmetic coding, binarization schemes and transformations, into a straightforward solution for the compression of sequencing data. RESULTS: We demonstrate that GABAC outperforms well-established (entropy) codecs in a significant set of cases and thus can serve as an extension for existing genomic compression solutions, such as CRAM. AVAILABILITY AND IMPLEMENTATION: The GABAC library is written in C++. We also provide a command line application which exercises all features provided by the library. GABAC can be downloaded from https://github.com/mitogen/gabac. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
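As a concrete example of one of the binarization schemes such coders rely on, the sketch below implements truncated-unary (TU) binarization, which maps a symbol to a sequence of bins that a context-adaptive binary arithmetic coder can then compress. It is illustrative only and does not reproduce GABAC's C++ API or its actual configuration.

```python
def truncated_unary(value: int, cmax: int) -> list[int]:
    """Truncated-unary (TU) binarization: 'value' ones followed by a
    terminating zero, with the zero omitted when value == cmax."""
    if not 0 <= value <= cmax:
        raise ValueError("value out of range")
    bins = [1] * value
    if value < cmax:
        bins.append(0)
    return bins

# Each bin would then be fed to a binary arithmetic coder with a
# context selected from previously coded bins.
for v in range(4):
    print(v, truncated_unary(v, cmax=3))
```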


Subjects
Data Compression, High-Throughput Nucleotide Sequencing, Genome, Genomics, Software
4.
J Comput Biol; 25(10): 1141-1151, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30059248

ABSTRACT

Previous studies on quality score compression fall into two main lines: lossy schemes and lossless schemes. Lossy schemes enable better management of computational resources, so in practice, and for preliminary analyses, bioinformaticians may prefer to work with a lossy quality score representation. However, the original quality scores might be required for a deeper analysis of the data and may therefore need to be retained, which calls for lossless compression alongside the lossy representation. We developed a space-efficient hierarchical representation of quality scores, QScomp, which allows users to work with lossy quality scores in routine analysis without sacrificing the ability to recover the original quality scores when further investigation is required. Each quality score is represented by a tuple through a novel decomposition. The first and second dimensions of these tuples are compressed separately, such that the first-level compression is a lossy scheme, while the compressed information of the second dimension allows users to recover the original quality scores. Experiments on real data reveal that downstream analysis using only the lossy part, which spends just 0.49 bits per quality score on average, shows competitive performance, and that the total space usage including the compressed second dimension is comparable to that of competing lossless schemes.
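The abstract does not spell out the decomposition, so the sketch below uses a generic quotient/remainder split as a stand-in: the coarse bin index plays the role of the lossy first dimension, and the within-bin offset plays the role of the second dimension that restores the exact score. The step size of 8 is an arbitrary assumption.

```python
def decompose(q: int, step: int = 8) -> tuple[int, int]:
    """Split a Phred quality score into a coarse bin index and a
    within-bin offset. Keeping only the bin index is lossy; adding
    the offset restores the exact score. Generic stand-in, not
    QScomp's actual decomposition."""
    return q // step, q % step

def reconstruct_lossy(bin_idx: int, step: int = 8) -> int:
    """Representative value (bin midpoint) when only the lossy
    first dimension is kept."""
    return bin_idx * step + step // 2

def reconstruct_lossless(bin_idx: int, offset: int, step: int = 8) -> int:
    return bin_idx * step + offset

scores = [37, 12, 40, 2]
tuples = [decompose(q) for q in scores]
print([reconstruct_lossy(b) for b, _ in tuples])        # lossy view
print([reconstruct_lossless(b, o) for b, o in tuples])  # exact scores
```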


Subjects
Algorithms, Data Compression/methods, Data Compression/standards, Genetic Variation, High-Throughput Nucleotide Sequencing/methods, Sequence Analysis, DNA/standards, Genomics, Humans
5.
Bioinformatics; 34(10): 1650-1658, 2018 May 15.
Article in English | MEDLINE | ID: mdl-29186284

ABSTRACT

Motivation: Recent advancements in high-throughput sequencing technology have led to a rapid growth of genomic data. Several lossless compression schemes have been proposed for the coding of such data, present in the form of raw FASTQ files and aligned SAM/BAM files. However, due to their high entropy, losslessly compressed quality values account for about 80% of the size of compressed files. For the quality values, we present a novel lossy compression scheme named CALQ. By controlling the coarseness of quality value quantization with a statistical genotyping model, we minimize the impact of the introduced distortion on downstream analyses. Results: We analyze the performance of several lossy compressors for quality values in terms of the trade-off between the achieved compressed size (in bits per quality value) and the precision and recall achieved after running a variant calling pipeline over sequencing data of the well-known NA12878 individual. By compressing and reconstructing quality values with CALQ, we observe better average variant calling performance than with the original data, while achieving a size reduction of about one order of magnitude with respect to state-of-the-art lossless compressors. Furthermore, we show that CALQ performs as well as or better than state-of-the-art lossy compressors in terms of variant calling recall and precision for most of the analyzed datasets. Availability and implementation: CALQ is written in C++ and can be downloaded from https://github.com/voges/calq. Contact: voges@tnt.uni-hannover.de or mhernaez@illinois.edu. Supplementary information: Supplementary data are available at Bioinformatics online.
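The sketch below shows the general idea of confidence-driven quantization: positions where the genotype is already near-certain tolerate a coarser quality representation. The two step sizes and the 0.9 confidence threshold are toy assumptions; CALQ derives its quantizer from a statistical genotyping model rather than a fixed cutoff.

```python
def quantize_quality(q: int, genotype_confidence: float) -> int:
    """Quantize a Phred quality score with a step size that grows with
    genotype confidence. Toy stand-in for CALQ's model, which derives
    the quantizer from posterior genotype probabilities."""
    step = 2 if genotype_confidence < 0.9 else 8  # assumed thresholds
    return (q // step) * step + step // 2

# Same quality value, different genotype certainty at two loci:
print(quantize_quality(33, genotype_confidence=0.55))  # fine-grained
print(quantize_quality(33, genotype_confidence=0.99))  # coarse
```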


Subjects
Data Compression/methods, Genomics/methods, High-Throughput Nucleotide Sequencing/methods, Software, Algorithms, Humans, Models, Statistical, Sequence Alignment, Sequence Analysis, DNA/methods
6.
Nat Methods; 13(12): 1005-1008, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27776113

ABSTRACT

High-throughput sequencing (HTS) data are commonly stored as raw sequencing reads in FASTQ format or as reads mapped to a reference, in SAM format, both with large memory footprints. Worldwide growth of HTS data has prompted the development of compression methods that aim to significantly reduce HTS data size. Here we report on a benchmarking study of available compression methods on a comprehensive set of HTS data using an automated framework.
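A minimal version of such an automated benchmark is sketched below: compress the same payload with several codecs and report compression ratio and wall-clock time. The study evaluates specialized HTS compressors on real FASTQ/SAM datasets; standard-library codecs and a synthetic read are used here only so the sketch stays self-contained.

```python
import bz2, gzip, lzma, time

def benchmark(raw: bytes) -> None:
    """Compress one payload with several general-purpose codecs and
    report compression ratio and wall-clock time."""
    for name, compress in (("gzip", gzip.compress),
                           ("bzip2", bz2.compress),
                           ("xz", lzma.compress)):
        t0 = time.perf_counter()
        out = compress(raw)
        dt = time.perf_counter() - t0
        print(f"{name:5s}  ratio {len(raw) / len(out):5.2f}  {dt:.3f} s")

# Synthetic FASTQ-like payload standing in for a real sequencing file.
record = b"@read\nACGTACGTACGTACGT\n+\nIIIIIIIIFFFFFFFF\n"
benchmark(record * 10000)
```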


Subjects
Computational Biology/methods, Data Compression/methods, High-Throughput Nucleotide Sequencing/methods, Animals, Cacao/genetics, Drosophila melanogaster/genetics, Escherichia coli/genetics, Humans, Pseudomonas aeruginosa/genetics
7.
Proc Data Compress Conf; 2016: 221-230, 2016.
Article in English | MEDLINE | ID: mdl-28845445

ABSTRACT

This paper provides the specification and an initial validation of an evaluation framework for the comparison of lossy compressors of genome sequencing quality values. The goal is to define the reference data, test sets, tools, and metrics that shall be used to evaluate the impact of lossy compression of quality values on human genome variant calling. The functionality of the framework is validated using two state-of-the-art genomic compressors. This work has been spurred by the current activity within the ISO/IEC SC29/WG11 technical committee (a.k.a. MPEG), which is investigating the possibility of starting a standardization activity for genomic information representation.
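The core metric of such a framework is the agreement between variant calls made from lossily compressed data and a gold-standard call set. The sketch below computes precision and recall over variants keyed as (chrom, pos, ref, alt); real evaluations additionally normalize variant representation and stratify by genomic region, so this is a simplified illustration, not the framework's specified tooling.

```python
def precision_recall(called: set[tuple], truth: set[tuple]) -> tuple[float, float]:
    """Precision and recall of a variant call set against a gold
    standard. Counts exact-match variants only."""
    tp = len(called & truth)
    precision = tp / len(called) if called else 0.0
    recall = tp / len(truth) if truth else 0.0
    return precision, recall

truth = {("chr1", 1000, "A", "G"), ("chr1", 2000, "C", "T")}
called = {("chr1", 1000, "A", "G"), ("chr1", 3000, "G", "A")}
print(precision_recall(called, truth))  # (0.5, 0.5)
```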
